9 research outputs found

    Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform is presented, entitled “Dance-the-Music”, that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher’s models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that the Dance-the-Music platform is effective in helping dance students to master the basics of dance figures.
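    A minimal sketch of the template idea described above, assuming each demonstration arrives as a (frames × joints × 3) array of motion-capture positions; the function names, the fixed-length resampling, and the inverse-RMSE score are illustrative assumptions, not the platform's actual implementation.

```python
# Sketch: build a spatiotemporal step template from repeated teacher
# demonstrations and score a student's attempt against it.
# Each demonstration is assumed to be an array of shape (T, J, 3):
# T time frames, J tracked joints, 3-D positions.
import numpy as np


def resample(demo: np.ndarray, n_frames: int = 100) -> np.ndarray:
    """Linearly resample a (T, J, 3) demonstration to a fixed number of frames."""
    t_old = np.linspace(0.0, 1.0, demo.shape[0])
    t_new = np.linspace(0.0, 1.0, n_frames)
    flat = demo.reshape(demo.shape[0], -1)            # (T, J*3)
    out = np.stack([np.interp(t_new, t_old, col) for col in flat.T], axis=1)
    return out.reshape(n_frames, demo.shape[1], 3)


def train_step_template(demos: list, n_frames: int = 100) -> np.ndarray:
    """Average time-normalised repetitions of one dance figure into a template."""
    return np.mean([resample(d, n_frames) for d in demos], axis=0)


def match_score(template: np.ndarray, attempt: np.ndarray) -> float:
    """Return a similarity score between a student attempt and the template."""
    attempt = resample(attempt, template.shape[0])
    rmse = np.sqrt(np.mean((template - attempt) ** 2))
    return float(1.0 / (1.0 + rmse))                  # higher means closer to the model
```

    In the platform itself such a score would be computed per dance figure and drive the audiovisual feedback; here it is simply an inverse RMSE over time-normalised joint positions.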

    The analysis of bodily gestures in response to music: methods for embodied music cognition based on machine learning


    Beating-time gestures imitation learning for humanoid robots

    Beating-time gestures are movement patterns of the hand swaying along with music, thereby indicating accented musical pulses. The spatiotemporal configuration of these patterns makes it difficult to analyse and model them. In this paper we present an innovative modelling approach based upon imitation learning, or Programming by Demonstration (PbD). Our approach, which combines Dirichlet process mixture models, hidden Markov models, dynamic time warping, and non-uniform cubic spline regression, is particularly innovative in that it handles spatial and temporal variability by generating a generalised trajectory from a set of periodically repeated movements. Although not within the scope of our study, our procedures may be implemented to control the movement behaviour of robots and avatar animations in response to music.
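    The following sketch illustrates only the alignment-and-averaging idea behind such a generalised trajectory, assuming each repetition is a (frames × 2) array of hand positions; the plain DTW, simple averaging, and spline smoothing below stand in for the fuller DPM/HMM pipeline described in the paper.

```python
# Sketch: derive a generalised beating-time trajectory from periodically
# repeated hand movements by DTW-aligning every repetition to the first,
# averaging, and spline-smoothing the result. The DPM/HMM stages are omitted.
import numpy as np
from scipy.interpolate import UnivariateSpline


def dtw_path(a: np.ndarray, b: np.ndarray):
    """Plain dynamic-time-warping alignment path between two trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal path from (n, m) to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


def generalised_trajectory(reps: list, smooth: float = 1e-3) -> np.ndarray:
    """Warp every repetition onto the first one, average, then spline-smooth."""
    ref = np.asarray(reps[0], dtype=float)
    warped = [ref]
    for rep in reps[1:]:
        rep = np.asarray(rep, dtype=float)
        aligned = np.zeros_like(ref)
        counts = np.zeros(len(ref))
        for i, j in dtw_path(ref, rep):
            aligned[i] += rep[j]
            counts[i] += 1
        warped.append(aligned / counts[:, None])
    mean = np.mean(warped, axis=0)
    t = np.linspace(0.0, 1.0, len(mean))
    return np.stack(
        [UnivariateSpline(t, mean[:, k], s=smooth)(t) for k in range(mean.shape[1])],
        axis=1,
    )
```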

    The surprising character of music : a search for sparsity in music evoked body movements

    The high dimensionality of music-evoked movement data makes it difficult to uncover the fundamental aspects of human music-movement associations. However, modeling these data via Dirichlet process mixture (DPM) models facilitates this task considerably. In this paper we present DPM models to investigate positional and directional aspects of music-evoked bodily movement. In an experimental study, subjects moved spontaneously to a musical piece characterized by passages of extreme contrasts in physical acoustic energy. The contrasts in acoustic energy caused surprise and triggered new gestural behavior. We used sparsity as a key indicator of surprise and made it visible in two ways: first, through a positional analysis using a Dirichlet process Gaussian mixture model (DPGMM), and second, through a directional analysis using a Dirichlet process multinomial mixture model (DPMMM). The results show that the gestural response follows the surprising or unpredictable character of the music.
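    As a rough illustration of the positional analysis, a truncated Dirichlet process Gaussian mixture can be fitted with scikit-learn's variational implementation; the data below are synthetic placeholders for the recorded positions, and the 0.01 weight threshold is an arbitrary choice.

```python
# Sketch: Dirichlet-process Gaussian mixture over 3-D position samples,
# approximated by scikit-learn's truncated variational implementation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
positions = rng.normal(size=(2000, 3))            # stand-in for mocap positions

dpgmm = BayesianGaussianMixture(
    n_components=20,                              # truncation level of the DP
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(positions)

# Few components with non-negligible weight = sparse occupancy of the mixture,
# which is the kind of sparsity the abstract uses as an indicator of surprise.
active = np.sum(dpgmm.weights_ > 0.01)
print(f"{active} of {dpgmm.n_components} components carry most of the mass")
```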

    Toward E-motion based music retrieval: a study of affective gesture recognition

    The widespread availability of digitized music collections and mobile music players has enabled us to listen to music during many of our daily activities, such as physical exercise, commuting, and relaxation, and many people enjoy this. A practical problem that comes with the wish to listen to music is that of music retrieval: the selection of desired music from a music collection. In this paper, we propose a new approach to facilitate music retrieval. Modern smartphones are commonly used as music players and are already equipped with inertial sensors that are suitable for obtaining motion information. In the proposed approach, emotion is derived automatically from arm gestures and is used to query a music collection. We derive predictive models for valence and arousal from empirical data, gathered in an experimental setup where inertial data recorded from arm movements are coupled to musical emotion. Part of the experiment is a preliminary study confirming that human subjects are generally capable of recognizing affect from arm gestures. Model validation in the main study confirmed the predictive capabilities of the models.
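    A hedged sketch of how such a pipeline could look, with made-up summary features of accelerometer data and ridge regression standing in for the study's actual valence and arousal models; all shapes, features, and ratings below are illustrative.

```python
# Sketch: map inertial arm-gesture recordings to valence and arousal using
# simple summary features and ridge regression. Feature choices, data shapes,
# and the regressor are illustrative assumptions, not the study's model.
import numpy as np
from sklearn.linear_model import Ridge


def gesture_features(acc: np.ndarray) -> np.ndarray:
    """Summarise a (T, 3) accelerometer recording of one arm gesture."""
    mag = np.linalg.norm(acc, axis=1)
    return np.array([
        mag.mean(),                  # overall movement intensity
        mag.std(),                   # variability
        np.abs(np.diff(mag)).mean()  # rough smoothness / jerkiness proxy
    ])


# Synthetic stand-ins: 50 recorded gestures with rated valence and arousal.
rng = np.random.default_rng(1)
recordings = [rng.normal(size=(300, 3)) for _ in range(50)]
valence = rng.uniform(-1, 1, size=50)
arousal = rng.uniform(-1, 1, size=50)

X = np.stack([gesture_features(r) for r in recordings])
valence_model = Ridge(alpha=1.0).fit(X, valence)
arousal_model = Ridge(alpha=1.0).fit(X, arousal)

# At retrieval time a new gesture is mapped to the (valence, arousal) plane
# and used to query tracks annotated with matching emotion coordinates.
query = gesture_features(rng.normal(size=(300, 3)))[None, :]
print(valence_model.predict(query), arousal_model.predict(query))
```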

    Expressive body movement responses to music are coherent, consistent, and low dimensional

    Embodied music cognition stresses the role of the human body as mediator for the encoding and decoding of musical expression. In this paper, we set up a low-dimensional functional model that accounts for 70% of the variability in expressive body movement responses to music. Using functional principal component analysis, we modeled individual body movements as a linear combination of a group average and a number of eigenfunctions. The group average and the eigenfunctions are common to all subjects and make up what we call the commonalities. An individual performance is then characterized by a set of scores (the individualities), one score per eigenfunction. The model is based on experimental data that show high levels of coherence and consistency between participants when they are grouped according to musical education, which points to an ontogenetic effect. Participants without formal musical education focus on the torso for the expression of basic musical structure (tempo). Musically trained participants decode additional structural elements in the music and focus on body parts having more degrees of freedom (such as the hands). Our results confirm earlier studies showing that different body parts move differently along with the music.
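    The decomposition can be illustrated with ordinary PCA on time-sampled curves, a discretised stand-in for the functional PCA used in the paper; the curves below are synthetic and the 70% threshold simply mirrors the figure quoted in the abstract.

```python
# Sketch: the "group average + eigenfunctions + per-subject scores"
# decomposition, computed as PCA (via SVD) on time-sampled movement curves.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_samples = 30, 200
curves = rng.normal(size=(n_subjects, n_samples))    # one movement curve per subject

group_average = curves.mean(axis=0)                   # the commonality all subjects share
centred = curves - group_average

# Eigenfunctions = principal directions of the centred curves.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(explained), 0.70) + 1)   # components for ~70%
eigenfunctions = Vt[:k]                               # (k, n_samples), shared by all
scores = centred @ eigenfunctions.T                   # the individualities

# Any subject's curve is approximately the group average plus a weighted sum
# of the shared eigenfunctions, weighted by that subject's scores.
reconstruction = group_average + scores @ eigenfunctions
```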

    The Conducting Master: an interactive, real-time gesture monitoring system based on spatiotemporal motion templates

    Research in the field of embodied music cognition has shown the importance of coupled processes of body activity (action) and multimodal percepts of these actions (perception) in how music is processed. Technologies from the field of human-computer interaction (HCI) provide excellent means to intervene in, and extend, these coupled action-perception processes. In this article we apply this model to a concrete HCI application, called the “Conducting Master”. The application lets multiple users interact with the system in real time through conducting gestures made in synchrony with the musical meter (i.e., beating-time gestures). Techniques are provided to model and automatically recognize these gestures in order to provide multimodal feedback streams back to the users. These techniques are based on template-based methods that approach beating-time gestures explicitly from a spatiotemporal account. To conclude, we present some concrete setups in which the functionality of the Conducting Master was evaluated.
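    A small sketch of what template-based, real-time monitoring of beating-time gestures could look like; the class name, the sliding-window buffer, the per-metre templates, and the mean Euclidean distance are assumptions for illustration rather than the Conducting Master's actual implementation.

```python
# Sketch: real-time monitoring of beating-time gestures by template matching.
# An incoming stream of 2-D hand positions is compared, over a sliding window,
# against spatiotemporal templates trained for each metre (e.g. 2/4, 3/4, 4/4).
from collections import deque
from typing import Dict, Optional

import numpy as np


class ConductingMonitor:
    def __init__(self, templates: Dict[str, np.ndarray]):
        # Each template is a (W, 2) array covering one metric cycle.
        self.templates = templates
        window = len(next(iter(templates.values())))
        self.buffer = deque(maxlen=window)

    def push(self, hand_xy: np.ndarray) -> Optional[str]:
        """Add one mocap frame; return the best-matching metre once the window is full."""
        self.buffer.append(hand_xy)
        if len(self.buffer) < self.buffer.maxlen:
            return None
        window = np.asarray(self.buffer, dtype=float)
        window = window - window.mean(axis=0)         # position-invariant comparison
        distances = {
            name: np.mean(np.linalg.norm(window - (tpl - tpl.mean(axis=0)), axis=1))
            for name, tpl in self.templates.items()
        }
        return min(distances, key=distances.get)
```

    The lowest-distance template gives the recognized metre for the current window, which could then drive the multimodal feedback streams mentioned in the abstract.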